66 research outputs found

    Improved Stratified Control for Hexapod Robots and Object Manipulation with Finger Relocation

    The paper deals with the motion design of legged robots and dexterous hands. We show the possibilities and limitations of the conventional stratified control approach through the relatively simple example of a hexapod robot and offer some proposals for a more robust motion planning solution. The precision of the algorithms was improved by step-length modification, and their applicability was increased by time scaling. The developed software is based on symbolic computation. In addition, our fundamental goal is to provide a powerful basic concept for object manipulation with finger relocation in the context of stratified control, as an extension of earlier works. The concept focuses on finger gaiting manipulation (based on finger relocation) to gain some attributes of the nonsmooth object
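
    The abstract names time scaling as the mechanism that widened the planner's applicability. As a rough illustration of that idea only (not the paper's symbolic-computation software), the sketch below uniformly rescales the duration of a sampled joint path so that an assumed velocity limit is respected; the path, v_max, and sample count are all hypothetical.

```python
import numpy as np

def time_scale_trajectory(q_of_s, duration, v_max, n_samples=100):
    """Sample a geometric path q(s), s in [0, 1], and uniformly stretch its
    duration so that the peak joint speed stays below v_max."""
    s = np.linspace(0.0, 1.0, n_samples)
    q = np.array([q_of_s(si) for si in s])             # (n_samples, dof)
    dt = duration / (n_samples - 1)
    v_peak = np.max(np.abs(np.diff(q, axis=0)) / dt)   # fastest joint on the nominal timing
    scale = max(1.0, v_peak / v_max)                   # slow down only if a limit is exceeded
    t = np.linspace(0.0, duration * scale, n_samples)
    return t, q

# Purely illustrative two-joint path (not a real hexapod gait).
t, q = time_scale_trajectory(
    lambda s: np.array([np.sin(2 * np.pi * s), 0.5 * np.cos(2 * np.pi * s)]),
    duration=1.0, v_max=2.0)
```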

    Application of Modified PageRank Algorithm for Anomaly Detection in Movements of Older Adults

    It is well documented that the share of older adults in the global population will surpass that of other age groups. A majority of the elderly would still prefer to keep an active lifestyle. In support of this lifestyle, various monitoring systems are being designed and deployed to integrate seamlessly with the daily living activities of older adults while preserving various levels of their privacy. Motion tracking is one such health monitoring system. When properly designed, deployed, integrated, and analyzed, it can assist in detecting the onset of anomalies in the health of the elderly at various levels of their Movements and Activities of Daily Living (MADL). This paper explores how the framework of the PageRank algorithm can be extended for monitoring the global movement patterns of older adults at their place of residence. Using an existing dataset, the paper shows how the movement patterns between various rooms can be represented as a directed graph with weighted edges. To demonstrate how PageRank can be utilized, a base graph representing a normal pattern is defined and used as a reference for anomaly detection (i.e., instances of observation in which the measured movement pattern deviates from the previously defined normal pattern). It is shown how the PageRank algorithm can detect a simulated change in the pattern of motion when compared with the baseline normal pattern. This feature can offer a practical approach for detecting anomalies in the movement patterns of older adults in their own place of residence, in support of the aging-in-place paradigm.
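
    As a hedged sketch of the idea (not the paper's dataset or exact pipeline), the following snippet builds a weighted directed graph of room-to-room transitions, computes PageRank scores, and flags deviation from a baseline pattern; the room names, transition counts, and threshold are hypothetical.

```python
import networkx as nx

def room_pagerank(transitions, alpha=0.85):
    """Build a weighted directed graph from (from_room, to_room, count) tuples
    and return the PageRank score of each room."""
    g = nx.DiGraph()
    for src, dst, count in transitions:
        g.add_edge(src, dst, weight=count)
    return nx.pagerank(g, alpha=alpha, weight="weight")

def deviation(baseline, observed):
    """L1 distance between two PageRank vectors over the union of rooms."""
    rooms = set(baseline) | set(observed)
    return sum(abs(baseline.get(r, 0.0) - observed.get(r, 0.0)) for r in rooms)

# Hypothetical transition counts (bedroom -> kitchen 12 times, etc.).
normal = room_pagerank([("bedroom", "kitchen", 12), ("kitchen", "livingroom", 9),
                        ("livingroom", "bedroom", 10), ("bedroom", "bathroom", 6),
                        ("bathroom", "bedroom", 6)])
today = room_pagerank([("bedroom", "bathroom", 14), ("bathroom", "bedroom", 13),
                       ("bedroom", "kitchen", 2), ("kitchen", "bedroom", 2)])
if deviation(normal, today) > 0.2:   # threshold is an assumption
    print("movement pattern deviates from the baseline")
```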

    Passive Observer of Activities for Aging in Place Using a Network of RGB-D Sensors

    Aging in place is a notion which supports the independent living of older adults at their own place of residence for as long as possible. To support this alternative to various other types of assisted living, monitoring technologies need to be explored and studied in order to strike a balance between the preservation of privacy and the adequacy of sensed information for better estimation and visualization of movements and activities. In this paper, we explore such a monitoring paradigm and show how a network of RGB-D sensors can be utilized for this purpose. This type of sensor offers both visual and depth sensing modalities, whose information can be fused and coded for better protection of privacy. For this purpose, we introduce the novel notion of a passive observer. This observer is triggered only by detecting the absence of movements of older adults in the scene. This is accomplished by classifying and localizing objects in the monitored scene both before and after the detection of movements. A deep learning tool is utilized for visual classification of known objects in the physical scene, followed by a virtual reality reconstruction of the scene in which the shape and location of objects are recreated. Such a reconstruction can be used as a visual summary to identify objects which were handled by an older adult in between observations. The simplified virtual scene can be used, for example, by caregivers or monitoring personnel to assist in detecting any anomalies. This virtual visualization can offer a high level of privacy protection without any direct visual access to the monitored scene. In addition, using a scene graph representation, an automatic decision-making tool is proposed in which spatial relationships between the objects are used to estimate the expected activities. The results of this paper are demonstrated through two case studies.
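
    A minimal sketch of the before/after comparison such a passive observer could perform, assuming an object detector has already produced labels and 3D centroids for each snapshot; the object names, coordinates, and tolerance are hypothetical, and this is not the paper's scene-graph tool.

```python
def scene_changes(before, after, moved_tol=0.3):
    """Compare two object-detection snapshots, each a dict
    label -> (x, y, z) centroid in metres, and summarize what changed."""
    changes = []
    for label, pos in before.items():
        if label not in after:
            changes.append(f"{label} removed")
        else:
            dist = sum((a - b) ** 2 for a, b in zip(pos, after[label])) ** 0.5
            if dist > moved_tol:
                changes.append(f"{label} moved {dist:.2f} m")
    for label in after:
        if label not in before:
            changes.append(f"{label} appeared")
    return changes

# Hypothetical snapshots taken before and after movement was detected.
print(scene_changes({"cup": (0.2, 0.9, 1.1), "book": (1.0, 0.8, 0.4)},
                    {"cup": (0.8, 0.9, 1.1), "pills": (1.2, 0.8, 0.5)}))
```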

    Hand Motion and Posture Recognition in a Network of Calibrated Cameras

    This paper presents a vision-based approach for hand gesture recognition which combines both trajectory and hand posture recognition. The hand area is segmented from cluttered and moving backgrounds by fixed-range CbCr thresholding and tracked by a Kalman filter. With the tracking results of two calibrated cameras, the 3D hand motion trajectory can be reconstructed. The trajectory is then modeled by dynamic movement primitives, and a support vector machine is trained for trajectory recognition. The scale-invariant feature transform is employed to extract features from segmented hand postures, and a novel strategy for hand posture recognition is proposed. A gesture vector is introduced to recognize a hand gesture as a whole by combining the recognition results of the motion trajectory and hand postures, and a support vector machine is trained for gesture recognition based on these gesture vectors.
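
    A hedged OpenCV sketch of fixed-range CbCr skin segmentation; the Cr/Cb ranges shown are commonly used defaults, not necessarily the thresholds used in the paper, and the Kalman filter, DMP, and SVM stages are omitted.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr, cr_range=(133, 173), cb_range=(77, 127)):
    """Fixed-range CbCr skin segmentation: threshold the Cr and Cb channels
    and clean the resulting mask with a small morphological opening."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)   # channel order: Y, Cr, Cb
    lower = np.array([0, cr_range[0], cb_range[0]], dtype=np.uint8)
    upper = np.array([255, cr_range[1], cb_range[1]], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Example: frame = cv2.imread("hand.png"); mask = segment_hand(frame)
```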

    Toward Design of a Drip-Stand Patient Follower Robot

    A person-following robot is an application of service robotics that primarily focuses on human-robot interaction, for example in security and health care. This paper explores some of the design and development challenges of a patient-follower robot. Our motivation stemmed from the mobility challenges commonly faced by patients holding on to and pulling a medical drip stand. Unlike other designs for person-following robots, the proposed design needs to preserve patient privacy as much as possible while coping with the operational constraints of a hospital environment. We placed a single camera close to the ground, which yields a narrower field of view and thereby helps preserve patient privacy. Through a unique design of artificial markers placed on various hospital clothing, we show how the visual tracking algorithm can determine the spatial location of the patient with respect to the robot. The robot control algorithm is implemented in three parts: (a) patient detection; (b) distance estimation; and (c) trajectory control. For patient detection, the proposed algorithm utilizes two complementary tools for target detection, namely template matching and colour histogram comparison. We apply a pinhole camera model to estimate the distance from the robot to the patient. We also propose a novel movement trajectory planner which maintains the dynamic tipping stability of the robot by adjusting the peak acceleration. The paper further demonstrates the practicality of the proposed design through several experimental case studies.
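
    A minimal sketch of the pinhole-model distance estimate, assuming the camera focal length (in pixels) and the true marker size are known; the numbers are hypothetical, not calibration values from the paper.

```python
def estimate_distance(focal_length_px, marker_height_m, marker_height_px):
    """Pinhole camera model: distance along the optical axis is
    (focal length in pixels * real size) / apparent size in pixels."""
    return focal_length_px * marker_height_m / marker_height_px

# Hypothetical numbers: a 0.20 m marker imaged at 80 px with a 600 px focal length.
print(estimate_distance(600.0, 0.20, 80.0))   # -> 1.5 (metres)
```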

    Critical Overview of Visual Tracking with Kernel Correlation Filter

    With the development of new methodologies for faster training on datasets, there is a need to provide an in-depth explanation of how such methods work. This paper attempts to provide such an understanding for one correlation filter-based tracking technology, the Kernelized Correlation Filter (KCF), which exploits implicit properties of tracked image patches (circulant matrices) for training and tracking in real time. Unlike deep learning, it is not data intensive. KCF uses the implicit dynamic properties of the scene and the movements of image patches to form an efficient representation based on the circulant structure, exploiting properties such as diagonalization in the Fourier domain. The computational efficiency of KCF, which makes it ideal for low-power heterogeneous computational processing technologies, lies in its ability to operate in a high-dimensional feature space without explicitly invoking computation in that space. Despite its strong practical potential in visual tracking, there is a need for an in-depth critical understanding of the method and its performance, which this paper aims to provide. We present a survey of KCF and its method, along with an experimental study that highlights its novel approach and some of the future challenges associated with it, through observations on standard performance metrics, in an effort to make the algorithm easy to investigate. We further compare the method against the state of the art on current public benchmarks such as OTB-50, VOT-2015, and VOT-2019. We observe that KCF is a simple-to-understand tracking algorithm that does well on popular benchmarks and has potential for further improvement. The paper aims to provide researchers with a basis for understanding and comparing KCF with other tracking technologies in order to explore the possibility of an improved KCF tracker.
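
    For readers who want the core equations in runnable form, the following single-channel NumPy sketch shows the Fourier-domain training and detection steps of KCF under simplifying assumptions (no cosine window, no multi-channel features, no model interpolation); variable names and default parameters are illustrative, not the reference implementation.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation of two equally sized patches for all circular
    shifts, computed with FFTs (the circulant-matrix trick at the heart of KCF)."""
    cross = np.real(np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)))
    d = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * cross) / x.size
    return np.exp(-np.maximum(d, 0.0) / (sigma ** 2))

def kcf_train(x, y, lam=1e-4, sigma=0.5):
    """Dual ridge regression in the Fourier domain: alpha_hat = y_hat / (k_hat + lambda)."""
    k = gaussian_correlation(x, x, sigma)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def kcf_detect(alpha_hat, x_model, z, sigma=0.5):
    """Response map over all circular shifts of the candidate patch z; its argmax
    gives the estimated translation of the target."""
    k = gaussian_correlation(x_model, z, sigma)
    return np.real(np.fft.ifft2(alpha_hat * np.fft.fft2(k)))

# x: windowed training patch, y: Gaussian-shaped target response, z: patch from the next frame.
```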

    Multimodal Sensing Interface for Haptic Interaction

    This paper investigates the integration of a multimodal sensing system for exploring the limits of vibrotactile haptic feedback when interacting with 3D representations of real objects. In this study, the spatial locations of the objects are mapped to the work volume of the user using a Kinect sensor. The position of the user’s hand is obtained using marker-based visual processing. The depth information is used to build a vibrotactile map on a haptic glove enhanced with vibration motors. The users can perceive the location and dimension of remote objects by moving their hand inside a scanning region. A marker detection camera provides the location and orientation of the user’s hand (glove) in order to map the corresponding tactile message. A preliminary study was conducted to explore how different users perceive such haptic experiences. Factors such as the total number of objects detected, object separation resolution, and dimension-based and shape-based discrimination were evaluated. The preliminary results showed that the localization and counting of objects can be attained with a high degree of success. The users were able to classify groups of objects of different dimensions based on the perceived haptic feedback.
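
    A hedged sketch of how depth under the glove could be converted into per-motor vibration intensities; the number of motors, depth ranges, and band layout are hypothetical and not taken from the paper.

```python
import numpy as np

def vibrotactile_map(depth_patch, n_motors=5, near_m=0.5, far_m=2.0):
    """Split the depth patch under the hand into vertical bands, one per motor,
    and map closer surfaces to stronger vibration (0..1 duty cycle)."""
    bands = np.array_split(depth_patch, n_motors, axis=1)
    intensities = []
    for band in bands:
        valid = band[band > 0]                   # 0 = no depth reading from the sensor
        if valid.size == 0:
            intensities.append(0.0)
            continue
        d = float(np.median(valid))
        strength = 1.0 - (d - near_m) / (far_m - near_m)
        intensities.append(float(np.clip(strength, 0.0, 1.0)))
    return intensities

# depth_patch: depth values (metres) from the Kinect under the tracked glove position.
```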

    Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction

    Tracking the movements of a person’s body in a natural living environment is a challenging undertaking. Such tracking information can be used as part of detecting the onset of anomalies in movement patterns or as part of a remote monitoring environment. The tracking information can be mapped onto and visualized through a virtual avatar model of the tracked person. This paper presents an initial novel experimental study of using a commercially available deep-learning body tracking system based on an RGB-D sensor for virtual human model reconstruction. We carried out our study in an indoor environment under natural conditions. To evaluate the tracker, we experimentally study its output, a skeleton (stick-figure) data structure, under several conditions in order to observe its robustness and identify its drawbacks. In addition, we show and study how the generic model can be mapped for virtual human model reconstruction. It was found that the deep-learning tracking approach using an RGB-D sensor is susceptible to various environmental factors, which result in missing or noisy estimates of the locations of skeleton joints. This, in turn, introduces challenges for further virtual model reconstruction. We present an initial approach for compensating for such noise, resulting in smoother temporal variation of the joint coordinates in the captured skeleton data. We also explore how the extracted joint position information of the skeleton data can be used as part of the virtual human model reconstruction.
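
    As an illustration of the kind of noise compensation discussed above (not the paper's exact method), the sketch below applies an exponential moving average to the joint coordinates and simply holds the last estimate when a joint is dropped by the tracker; the joint count and smoothing factor are assumptions.

```python
import numpy as np

class JointSmoother:
    """Exponential moving average over per-frame joint positions; frames in
    which a joint is missing (reported as NaN) keep the last smoothed value."""
    def __init__(self, n_joints, alpha=0.3):
        self.alpha = alpha
        self.state = np.full((n_joints, 3), np.nan)

    def update(self, joints):
        joints = np.asarray(joints, dtype=float)   # shape (n_joints, 3)
        for i, p in enumerate(joints):
            if np.any(np.isnan(p)):
                continue                           # joint missing: keep previous estimate
            if np.any(np.isnan(self.state[i])):
                self.state[i] = p                  # first valid observation
            else:
                self.state[i] = self.alpha * p + (1 - self.alpha) * self.state[i]
        return self.state.copy()
```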

    Deep Attention Models for Human Tracking Using RGBD

    Visual tracking performance has long been limited by the lack of better appearance models. These models fail either where the appearance changes rapidly, as in motion-based tracking, or where accurate information about the object may not be available, as in color camouflage (where background and foreground colors are similar). This paper proposes a robust, adaptive appearance model which works accurately in situations of color camouflage, even in the presence of complex natural objects. The proposed model includes depth as an additional feature in a hierarchical modular neural framework for online object tracking. The model adapts to confusing appearance by identifying depth as a stable property distinguishing the target from the surrounding object(s). Depth complements the existing RGB features in scenarios where the RGB features fail to adapt and hence become unstable over long durations. The parameters of the model are learned efficiently in a deep network consisting of three modules: (1) the spatial attention layer, which discards the majority of the background by selecting a region containing the object of interest; (2) the appearance attention layer, which extracts appearance and spatial information about the tracked object; and (3) the state estimation layer, which enables the framework to predict the future appearance and location of the object. Three different models were trained and tested to analyze the effect of depth alongside RGB information. In addition, a model is proposed that utilizes only depth as a standalone input for tracking. The proposed models were also evaluated in real time using KinectV2 and showed very promising results. The results of our proposed network structures, and their comparison with a state-of-the-art RGB tracking model, demonstrate that adding depth significantly improves tracking accuracy in more challenging environments (i.e., cluttered and camouflaged scenes). Furthermore, the results of the depth-based models showed that depth data can provide enough information for accurate tracking, even without RGB information.
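
    To make the three-module organization concrete, here is a deliberately small PyTorch sketch with the same structure (spatial attention, appearance attention, state estimation); the layer sizes, the GRU cell, the 4-channel RGB-D input, and the box parameterization are assumptions, not the architecture evaluated in the paper.

```python
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Keeps only a region of interest around the previous target location,
    discarding most of the background before feature extraction."""
    def forward(self, frames, box):
        x, y, w, h = box                        # box in pixels (assumed known)
        return frames[:, :, y:y + h, x:x + w]   # frames: (N, 4, H, W) RGB-D

class AppearanceAttention(nn.Module):
    """Small CNN that turns an RGB-D crop into an appearance code."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, crop):
        return self.net(crop)

class StateEstimator(nn.Module):
    """Recurrent layer that predicts the next bounding box from appearance codes."""
    def __init__(self, dim=128):
        super().__init__()
        self.rnn = nn.GRUCell(dim, dim)
        self.box_head = nn.Linear(dim, 4)

    def forward(self, code, hidden):
        hidden = self.rnn(code, hidden)
        return self.box_head(hidden), hidden
```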

    Deep representation learning: Fundamentals, Perspectives, Applications, and Open Challenges

    Machine learning algorithms have had a profound impact on the field of computer science over the past few decades. The performance of these algorithms is greatly influenced by the representations derived from the data during the learning process. The representations learned in a successful learning process should be concise, discrete, meaningful, and applicable across a variety of tasks. Recent efforts have been directed toward developing deep learning models, which have proven to be particularly effective at capturing high-dimensional, non-linear, and multi-modal characteristics. In this work, we discuss the principles and developments in learning representations and converting them into desirable applications. In addition, for each framework or model, the key issues and open challenges, as well as the advantages, are examined.